Active target sensing is the task of discovering and classifying an unknown number of targets in an environment and is critical in search-and-rescue missions. This paper develops a deep reinforcement learning approach to plan informative trajectories that increase the likelihood that an uncrewed aerial vehicle (UAV) discovers missing targets. Our approach efficiently (1) explores the environment to discover new targets, (2) exploits its current belief of the target states and incorporates inaccurate sensor models for high-fidelity classification, and (3) generates dynamically feasible trajectories for an agile UAV by employing a motion primitive library. Extensive simulations on randomly generated environments show that our approach is more efficient in discovering and classifying targets than several other baselines. A unique characteristic of our approach, in contrast to heuristic informative path planning approaches, is that it is robust to varying degrees of deviation of the prior belief from the true target distribution, thereby alleviating the challenge of designing heuristics specific to the application conditions.
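The abstract gives no implementation details, so the following is only a minimal sketch of what a motion primitive library for an agile UAV might look like: short forward-simulated segments at fixed turn and climb rates that a learned policy selects among by index. All parameters (speed, yaw rates, climb rates, horizon) and the function `make_primitive_library` are illustrative assumptions, not the authors' design.

```python
# A minimal sketch of an RL action space built from a motion primitive
# library. Each primitive is a short, dynamically feasible trajectory
# segment; the policy outputs one primitive index per decision step.
import itertools
import numpy as np

def make_primitive_library(speed=5.0, dt=0.1, horizon=2.0):
    """Enumerate short trajectory segments at fixed turn/climb rates."""
    yaw_rates = np.deg2rad([-30.0, 0.0, 30.0])   # assumed turn-rate options
    climb_rates = [-1.0, 0.0, 1.0]               # assumed climb rates (m/s)
    steps = int(horizon / dt)
    library = []
    for yaw_rate, climb in itertools.product(yaw_rates, climb_rates):
        pts, x, y, z, yaw = [], 0.0, 0.0, 0.0, 0.0
        for _ in range(steps):                   # forward-simulate the segment
            yaw += yaw_rate * dt
            x += speed * np.cos(yaw) * dt
            y += speed * np.sin(yaw) * dt
            z += climb * dt
            pts.append((x, y, z, yaw))
        library.append(np.array(pts))
    return library

primitives = make_primitive_library()
print(f"{len(primitives)} primitives, each {primitives[0].shape[0]} steps long")
```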
The availability of well-curated datasets has driven the success of machine learning (ML) models. Despite increased access to Earth observation data for agriculture, there are only a few curated, labelled datasets, which limits their potential for training ML models for remote sensing in agriculture. To this end, we introduce SICKLE, a first-of-its-kind dataset containing time-series images at different spatial resolutions from three different satellites, annotated with multiple key cropping parameters for paddy cultivation in the Cauvery Delta region of Tamil Nadu, India. The dataset comprises 2,398 season-wise samples from 388 unique plots distributed across four districts of the delta, and covers multi-spectral, thermal, and microwave data for the period from January 2018 to March 2021. The paddy samples are annotated with four key cropping parameters: sowing date, transplanting date, harvesting date, and crop yield. This is one of the first studies to consider the growing season (defined using the sowing and harvesting dates) as part of a dataset. We also propose a yield prediction strategy that uses time-series data generated based on the observed growing season, together with standard seasonal information obtained for the region from the Tamil Nadu Agricultural University. The resulting performance improvement highlights the impact of ML techniques that leverage domain knowledge aligned with the standard practices followed by farmers in a specific region. We benchmark the dataset on three separate tasks, namely crop type, phenology date (sowing, transplanting, harvesting), and yield prediction, and develop an end-to-end framework for predicting key crop parameters in real-world settings.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. It is the largest collection of lidar sensor data to date and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Parkinson's disease is marked by altered and increased firing characteristics of pathological oscillations in the brain; in other words, it causes abnormal synchronous oscillations and suppression during neurological processing. To examine and regulate the synchronization and pathological oscillations in motor circuits, deep brain stimulators (DBS) are used. Although machine learning methods have been applied to investigate suppression, these models require large amounts of training data and computational power, both of which pose challenges for resource-constrained DBS. This research proposes a novel reinforcement learning (RL) framework for suppressing the synchronization in neuronal activity during episodes of neurological disorders with lower power consumption. The proposed RL algorithm comprises an ensemble of a temporal representation of stimuli and a twin delayed deep deterministic (TD3) policy gradient algorithm. We quantify the stability of the proposed framework to noise and reduced synchrony using RL for three pathological signaling regimes: regular, chaotic, and bursting, and further eliminate the undesirable oscillations. Furthermore, metrics such as evaluation rewards, energy supplied to the ensemble, and the mean point of convergence were used and compared against other RL algorithms, specifically Advantage Actor-Critic (A2C), Actor-Critic with Kronecker-Factored Trust Region (ACKTR), and Proximal Policy Optimization (PPO).
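For readers unfamiliar with TD3, here is a minimal NumPy sketch of its two defining mechanisms, clipped double-Q targets and target policy smoothing, which the proposed framework builds on. The toy linear actor and critics and all dimensions are assumptions for illustration; the paper's ensemble with a temporal representation of stimuli is not reproduced here.

```python
# A minimal sketch of the TD3 target computation: clipped double-Q
# learning with target policy smoothing (toy linear models, assumed dims).
import numpy as np

rng = np.random.default_rng(0)
gamma, noise_std, noise_clip = 0.99, 0.2, 0.5
state_dim, action_dim = 4, 1

Wa = rng.standard_normal((state_dim, action_dim))      # toy target actor
W1 = rng.standard_normal((state_dim + action_dim, 1))  # target critic 1
W2 = rng.standard_normal((state_dim + action_dim, 1))  # target critic 2 ("twin")

def target_actor(s):
    return np.tanh(s @ Wa)                             # actions in [-1, 1]

def q(W, s, a):
    return np.concatenate([s, a], axis=1) @ W

def td3_target(reward, next_state, done):
    # target policy smoothing: clipped noise added to the target action
    noise = np.clip(noise_std * rng.standard_normal((len(next_state), action_dim)),
                    -noise_clip, noise_clip)
    a_next = np.clip(target_actor(next_state) + noise, -1.0, 1.0)
    # clipped double-Q: the smaller critic estimate curbs overestimation
    q_min = np.minimum(q(W1, next_state, a_next), q(W2, next_state, a_next))
    return reward + gamma * (1.0 - done) * q_min

y = td3_target(np.ones((8, 1)), rng.standard_normal((8, state_dim)), np.zeros((8, 1)))
print(y.shape)  # (8, 1): regression target shared by both learned critics
```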
Problem statement: Standardisation of AI fairness rules and benchmarks is challenging because AI fairness and other ethical requirements depend on multiple factors such as context, use case, type of the AI system, and so on. In this paper, we elaborate that the AI system is prone to biases at every stage of its lifecycle, from inception to its usage, and that all stages require due attention for mitigating AI bias. We need a standardised approach to handle AI fairness at every stage. Gap analysis: While AI fairness is a hot research topic, a holistic strategy for AI fairness is generally missing. Most researchers focus only on a few facets of AI model-building. Peer review shows excessive focus on biases in the datasets, fairness metrics, and algorithmic bias. In the process, other aspects affecting AI fairness get ignored. The solution proposed: We propose a comprehensive approach in the form of a novel seven-layer model, inspired by the Open System Interconnection (OSI) model, to standardise AI fairness handling. Despite the differences in the various aspects, most AI systems have similar model-building stages. The proposed model splits the AI system lifecycle into seven abstraction layers, each corresponding to a well-defined AI model-building or usage stage. We also provide checklists for each layer and deliberate on potential sources of bias in each layer and their mitigation methodologies. This work will facilitate layer-wise standardisation of AI fairness rules and benchmarking parameters.
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on lessons from these works, we re-examine previous design choices and find that with appropriate choices (ResNets, cross-entropy based distributional backups, and feature normalization), offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using networks of up to 80 million parameters, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
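The "cross-entropy based distributional backup" refers to a C51-style categorical Bellman backup trained with cross entropy; below is a minimal sketch under assumed atom counts and value bounds. The `project_target` helper and all numbers are illustrative, not the paper's configuration.

```python
# A minimal sketch of a categorical (C51-style) distributional backup:
# shift/scale the value support by the Bellman operator, project it back
# onto the fixed support, and train the predicted distribution with
# cross entropy against the projected target.
import numpy as np

n_atoms, v_min, v_max, gamma = 51, -10.0, 10.0, 0.99
support = np.linspace(v_min, v_max, n_atoms)
dz = (v_max - v_min) / (n_atoms - 1)

def project_target(reward, done, next_probs):
    """Project the shifted target distribution back onto the support."""
    tz = np.clip(reward + gamma * (1.0 - done) * support, v_min, v_max)
    b = (tz - v_min) / dz
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    target = np.zeros(n_atoms)
    for j in range(n_atoms):      # split each atom's mass between neighbors
        target[lo[j]] += next_probs[j] * (hi[j] - b[j])
        target[hi[j]] += next_probs[j] * (b[j] - lo[j])
        if lo[j] == hi[j]:        # atom landed exactly on a bin
            target[lo[j]] += next_probs[j]
    return target

next_probs = np.full(n_atoms, 1.0 / n_atoms)   # toy next-state distribution
target = project_target(reward=1.0, done=0.0, next_probs=next_probs)
pred_logits = np.zeros(n_atoms)                # toy predicted logits
log_pred = pred_logits - np.log(np.exp(pred_logits).sum())
loss = -(target * log_pred).sum()              # cross-entropy backup loss
print(round(loss, 4), target.sum())            # projected target sums to 1
```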
We propose a new formulation for the multi-robot task planning and allocation problem that incorporates (a) precedence relationships between tasks; (b) coordination on tasks, allowing multiple robots to improve efficiency; and (c) cooperation on tasks through the formation of robot coalitions, where individual robots cannot perform the task alone. In our formulation, a task graph specifies the tasks and the relationships between them. We define a set of reward functions on the nodes and edges of the task graph. These functions model the effect of robot coalition size on task performance, and incorporate the influence of one task's performance on a dependent task. Solving this problem optimally is NP-hard. However, the task graph formulation allows us to leverage minimum-cost network flow methods to obtain approximate solutions efficiently. In addition, we explore a mixed-integer programming approach, which provides optimal solutions for small instances of the problem but is computationally expensive. We also develop a greedy heuristic algorithm as a baseline; one concrete reading of it is sketched below. Our modeling and solution approaches result in task plans that leverage task precedence relationships as well as robot coordination and cooperation to achieve high mission performance, even in large missions with many agents.
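The sketch walks an assumed toy task graph in precedence order and picks, per task, the coalition size with the best reward net of an assumed per-robot cost. The graph, reward model, and cost are all hypothetical; the paper's actual heuristic, flow-based, and MIP formulations are not shown.

```python
# A minimal greedy-baseline sketch over a toy task graph: node rewards
# saturate with coalition size, and an edge means the predecessor's
# performance boosts the successor's reward.
import networkx as nx

G = nx.DiGraph()
G.add_node("scan",   base=4.0, sat=2)
G.add_node("sample", base=6.0, sat=3)
G.add_node("haul",   base=3.0, sat=2)
G.add_edge("scan", "sample", boost=0.5)
G.add_edge("sample", "haul", boost=0.3)

ROBOT_COST = 0.5  # assumed cost of committing one robot to a coalition

def reward(task, n_robots, perf):
    """Reward saturates in coalition size and grows with predecessor performance."""
    node = G.nodes[task]
    r = node["base"] * min(n_robots, node["sat"]) / node["sat"]
    for pred in G.predecessors(task):
        r *= 1.0 + G.edges[pred, task]["boost"] * perf.get(pred, 0.0)
    return r

robots, perf, plan = 4, {}, []
for task in nx.topological_sort(G):  # respect task precedence relations
    best = max(range(1, robots + 1),
               key=lambda n: reward(task, n, perf) - ROBOT_COST * n)
    perf[task] = min(best, G.nodes[task]["sat"]) / G.nodes[task]["sat"]
    plan.append((task, best))

print(plan)  # e.g. [('scan', 2), ('sample', 3), ('haul', 2)]
```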
Deepfakes refer to tailored and synthetically generated videos, which are now prevalent and spreading at a large scale, threatening the trustworthiness of information available online. While existing datasets contain deepfakes that vary in their generation techniques, they do not consider progression in a "phylogenetic" manner: an existing deepfake face can be swapped with another face, the face-swapping process can be performed multiple times, and the final deepfake can thereby evolve to confuse deepfake detection algorithms. Furthermore, many databases do not provide the applied generation model as a target label. Model attribution helps enhance the explainability of detection results by providing information about the generative model used. To enable the research community to address these questions, this paper proposes DeePhy, a novel deepfake phylogeny dataset consisting of 5,040 deepfake videos generated using three different generation techniques. There are 840 once-swapped, 2,520 twice-swapped, and 1,680 thrice-swapped deepfake videos. With a size of over 30 GB, the database was prepared in over 1,100 hours using 18 GPUs with 1,352 GB of cumulative memory. We also present a benchmark of six deepfake detection algorithms on the DeePhy dataset. The results highlight the need for research on model attribution for deepfakes, and on generalizing the process over a variety of deepfake generation techniques. The database is available at: http://iab-rubric.org/deephy-database
Since healthcare is a critical aspect of life, health insurance has become an important scheme for minimizing medical expenses. Following the rise in insurance coverage, fraudulent activity in the healthcare industry has increased substantially, and fraud has become a significant contributor to rising medical costs, although its impact can be mitigated with fraud detection techniques. Machine learning techniques are used to detect such fraud. In this study, "Medicare Part D" insurance claims from the Centers for Medicare and Medicaid Services (CMS) of the US federal government are used to develop a fraud detection system. Applying machine learning algorithms to the class-imbalanced and high-dimensional Medicare dataset is a challenging task. To combat these challenges, the present work performs feature extraction after data sampling and then applies various classification algorithms to obtain better performance. Feature extraction is a dimensionality-reduction approach that transforms the attributes into linear or non-linear combinations of the actual attributes, generating a smaller, more diverse set of attributes and thereby reducing the dimensionality. Data sampling is commonly used to address class imbalance, either by increasing the frequency of the minority class or by decreasing the frequency of the majority class, so that both classes occur in approximately equal numbers. The proposed approach is evaluated through standard performance metrics. Accordingly, to detect fraud effectively, this study applies an autoencoder as the feature extraction technique, the Synthetic Minority Oversampling Technique (SMOTE) as the data sampling technique, and various decision-tree-based classifiers as the classification algorithms. The experimental results show that the combination of an autoencoder followed by SMOTE, with the LightGBM classifier, achieved the best results.
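A minimal end-to-end sketch of the described pipeline (autoencoder feature extraction, then SMOTE, then LightGBM) is shown below on synthetic stand-in data; the layer sizes, hyperparameters, and the `encode` helper are illustrative assumptions rather than the study's configuration.

```python
# A minimal sketch: autoencoder feature extraction -> SMOTE -> LightGBM,
# on synthetic imbalanced data standing in for Medicare Part D claims.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier

# toy stand-in for the high-dimensional, class-imbalanced claims data
X, y = make_classification(n_samples=4000, n_features=60, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) autoencoder: an MLP trained to reconstruct its input; the bottleneck
#    activations become the extracted low-dimensional features
ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
ae.fit(X_tr, X_tr)
encode = lambda X: np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])  # ReLU layer

# 2) SMOTE: oversample the minority (fraud) class in the encoded space
Z_tr, y_bal = SMOTE(random_state=0).fit_resample(encode(X_tr), y_tr)

# 3) LightGBM classifier on the balanced, encoded features
clf = LGBMClassifier(random_state=0).fit(Z_tr, y_bal)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(encode(X_te))[:, 1]))
```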